in 2019, popular mechanics asked me to write a 'decade in review' article about how AI evolved during the 2010s. the resulting piece was more of an explainer of AI buzzwords that were relatively new to me at the time than a glimpse into my personal opinions on any of these technologies. over the past four years, while the technologies behind automated bias and mega-dataset processing have only changed a little, the role of AI in human life is coming into sharper focus. and since this is a blog post, i can just come out and say it: AI fuckin' sucks, guys.
when journalists become 'established' on a beat, strangers on the internet start sending story ideas out of the blue. sometimes the idea is tangentially related to something that interested you anyway and it all comes together in glorious harmony, like when a random PR email about IoT-enabled buoys dovetails with a mini-obsession on my part with FCC airwave regulation to illustrate the increasingly crowded radio space in north america. most of the time, though, the pitches tech journalists get are chewed-up corporate drivel. interviews are always proposed with the CEO or founder--a sure sign that the executive, not a media expert, is the one designing the pitch. the very last person you want to ask about why a technology matters is the person who believes the right choice of words will make them the next tech billionaire. when the product in question is even slightly AI-adjacent, all the above gets worse: the executive's expectations for publicity and profit are even higher, while their understanding of how their own product actually works tends to gradually approach zero.
given these circumstances, it may not be surprising that i've never even followed up on an unsolicited AI pitch, let alone turned one into an actual story. in fact, the pitches i have received on the subject regularly remind me that we're still in the phase of AI evolution where anyone with enough money can pay a programmer to slap an algorithm on an ill-gotten dataset and sell subscriptions to whatever it churns out. while i'm probably never going to write an *official* article about all these garbage ideas and why they suck, it occurred to me recently that it might be worth explaining why that will never happen, even if only to myself.
so, as much out of spite as intellectual curiosity, i'm compiling a list of the worst AI-related business ideas I've heard over the past four years. some of them were pitched to me directly; others came from big PR newsletters, credulous tech rags, and tumblr. all of them elicit emotions in me that could not possibly have existed in a time before AI, so thanks for that. please enjoy these horrible, free ideas, and please pray with me to robot satan that not one of them ever successfully disrupts shit.
the EKG-headband-chatbot-therapist. finally, a technology that replaces expensive, human-error-riddled concepts like 'therapy' and 'empathy' with highly monetizable chatGPT. this one is only harmless as long as people see through the scam. but mental health care is expensive and inaccessible, so i worry about a future where garbage chat apps get hooked up to mental health surveillance channels to further pathologize and criminalize addiction and mental illness. large language models (LLMs) like the one that powers chatGPT also come with fun built-in vulnerabilities that are only starting to come to light. attackers can jailbreak the algorithm to mess with the bot's instructions--for example, by overriding safety features that keep it from encouraging self-harm, or by instructing the bot to imitate an abusive person (or to demand credit card information).
AI-generated foraging guides written by nonexistent authors with nonexistent qualifications. AI has been "writing" books for half a decade already, but i personally didn't understand how dangerous that could be until i heard about this particular genre. because if there's one thing you want to trust a large language model to explain to you, it's the difference between toxic and edible mushrooms. while i haven't personally heard of anyone dying because they thought the foraging guide they bought had been fact-checked by a human being, it seems like that's the only possible outcome of selling this mush as a cheaper alternative to genuine expertise. in the words of Alexis Nikole Nelson (@blackforager), "i'm just worried that someone who doesn't know any better is going to poison themselves."
AI business books. books written by supposed AI business experts sell AI itself as an "exponential revenue driver". while many clearly buy into their own hype, my impression is that some of these dorks think they're only exploiting other executives, which is a noble pursuit. the problem is that we live on a planet suffering under the crushing "growth" of unrestrained capitalism. books like these help executives continue to delude themselves that data is immaterial, which means AI is magic that makes money out of theoretical numbers without hurting anyone. just like these authors, the execs and business hopefuls who buy these books see the economy itself as a bottomless font of profit and personal glory. that makes it a lot easier for everyone invovled to cordon off their expertise around the "business side" of AI, relegating human and environmental costs to footnotes in their own journeys to success.
AI assembly lines. an AI-first company is one that relies on an algorithm, but also, inevitably, on anonymous, undercompensated human labor. Mary L. Gray and Siddharth Suri's Ghost Work from 2019 explored the growing ranks of AI laborers--the humans workers who label datasets, flag mistakes, and otherwise provide common-sense checks to the AI's overconfident statistical analysis. they found that the field is much bigger and more diverse than they'd expected--and that worker protections are virtually nonexistent.
bias-multiplying surveillance and security systems. this thinking removes accountability from the human chain of command and creating technological deniability for violent treatment of unfairly feared groups. products like these are typically sold to business owners and cops--more of that good old-fashioned consolidation of power.
minority language control. linguistics has a long history of chewing up Indigenous and other oppressed groups' languages for the sake of intellectual exercise. AI hasn't changed that so much as made the process faster, less labor-intensive, and potentially much more profitable. openAI's translation app, whisper, was trained on thousands of hours of audio footage scraped from the web, including over a thousand hours of Maori language--all without endorsement, let alone input, from actual Maori people. the result of the scraping and analysis--a mediocre translation app--is hardly as important as the reaction of the speaker population. Maori ethicist and academic Karaitaiana Taiuru told Eco-Business, "Data is like our land and natural resources...If Indigenous peoples don't have sovereignty of their own data, they will simply be re-colonised in this information society."
machine learning drones. as if enabling remote murder weren't depressing enough, some drones now have AI-enabled navigation systems. these weapons learn from new environments and even make decisions in the air--where, of course, no one can physically stop them from carrying out unsupervised statistical robot justice. this is the most extreme example i can think of that demonstrates how AI can be used to create artifical boundaries between the human decision-maker and the humans and environment impacted by that decision. when WWII generals wanted bombs deployed, they at least had to hand off immediate culpability to the human pilots who carried out those orders. AI drones, and whatever other autonomous self-teaching weapons get churned out in the future, are the lethal endpoint of bias automation so far.